Music Listening: What is in the Air?
François Pachet, Sony Computer Science Laboratory, Paris
Abstract
The 20th century is full of technological inventions that made the very idea of a listening device possible, from the early gramophones to the latest portable MiniDisc players. What evolutions can we predict for the listening devices of the future, and how will these evolutions change the way we access and listen to music? In this chapter, we suggest that listening devices can be greatly enhanced by new forms of user control that offer semantically preserving variations. These controls are intended to give listeners different musical perceptions of a piece of music, as opposed to traditional listening, in which the musical media is played passively by some neutral device. The objective is both to increase the musical comfort of listeners and, when possible, to provide them with smoother paths to new music (music they do not know, or do not like). This chapter illustrates this idea with a few examples of active listening projects conducted at Sony Computer Science Laboratory, Paris, based on the notion of constrained exploratory space. These constrained spaces suggest that the classical boundaries between composing, listening and mixing may be redefined, thereby assigning new roles to composers, sound engineers and listeners.

1. From Buttons to Exploration

We propose the idea of exploratory listening environments as a natural evolution in the history of musical controls. We first sketch a brief history of musical controls, and then introduce the notion of semantics-preserving musical exploratory environments.

1.1 History of Musical Controls

Each technological advance has brought with it new forms of control. The origins of listening machines with mass-produced musical materials may be traced back to the Phonograph, invented by Thomas Edison in 1878, which used tin foil cylinders, and shortly after to the Gramophone, invented by Berliner in 1888, which used flat disks. In these devices, no control was intentionally given to the user (see, e.g., Read & Welch, 1976). There was, however, an unintentional control in the Gramophone, in that the horn could be turned around, thereby influencing the directivity of the sound source.

Electricity soon began to be used for listening devices, both with radio and with the new electrically recorded disk players of the 1920s. The use of electricity also introduced new controls: the volume button and the treble/bass button. Juke-boxes were introduced in 1927, allowing listeners to select music titles explicitly from a given catalogue of disks, using various sorts of push buttons. The next big technological advance was the invention of the binaural (stereo) recording method in 1931. The corresponding control was the panoramic button, allowing the listener to control the amount of signal sent to one loudspeaker or the other. Finally, digital audio formats introduced yet more controls, e.g. on the equalization of the sound. In all these cases, technological advances were followed by the introduction of "technical" controls, i.e. controls operating directly on the technology (see Figure 1).

Figure 1. A Phonograph (Edison, 1878, left); a Gramophone (Berliner, 1888, middle); a Rock-Ola 120-selection Juke-Box; and a MiniDisc player (Sony, 1997, right). Advances in technology do not necessarily imply more intelligent user control.

1.2 A Matter of Semantics

The very notion of musical control raises the issue of semantics.
The issue of musical semantics (does music have meaning?) has long been debated by musicologists, leading to different theories, which usually parallel the theories of semantics for languages. One of the main distinctions made by theorists is the opposition between so-called "referentialists" and "absolutists". Referentialists claim that musical meaning comes from actual references of musical forms to outside objects, i.e. music means something external to music itself. For instance, a particular scale in Indian music may refer to a particular human mood. Absolutists, e.g. Stravinsky, claim on the contrary that the meaning of music, if any, lies in music itself, i.e. in the relations that musical forms entertain with one another. Although these two viewpoints are not necessarily exclusive, as noted by Meyer (Meyer, 1956), they leave open much of the question of meaning. Eugene Narmour elaborated a much more precise theory of musical meaning based on the psychological notion of expectation (Narmour, 1992). In this theory, meaning occurs only when musical expectations are deceived. On the other hand, Rosen argues (Rosen, 1994) that the responsibility of preserving the meaning of a musical piece lies with the performer, who has to choose carefully, among an infinite set of possible interpretations, the one closest to that "intended" by the composer.

Without committing to one particular theory of musical meaning, we can note that meaning (whatever it means) has to do with choosing, among a set of interpretations, the "right one" or the "right ones", i.e. those intended by the composer. A second remark is that the controls produced by the history of sound recording technology have never had any concern for musical semantics: what does it mean to raise the sound level of a record? To shift the signal to the left loudspeaker? To increase the bass frequencies? Are the intentions of the composers, or even of the sound engineers, preserved in any way? From this remark, we suggest that "interesting" musical controls should preserve some sort of semantics of the musical material, i.e. preserve intentions, whenever possible. We argue that more meaningful controls, in the context of modern digital multimedia technology, amount to shifting from traditional button-based technology to musical exploration spaces.

1.3 Music Interactivity

As we have seen, technological buttons bear no semantics, because they are directly grounded in the technology, without any model of the music being played. But what can such a model be? Interesting approaches to musical interactivity are the music notation systems developed in the context of the annotation of music documents, as in the work of Lepain (1998), or in the Acousmograph system (INA-GRM). In these systems, the primary issue addressed is not music listening per se, but rather music notation, i.e. how to represent a musical document graphically (the document itself or the perception of the document), or how to infer a model of the music which can be notated or represented graphically. Another answer may be found in the notion of open form, initially developed in literature (Eco, 1962), which has had much impact on music theory and composition (Stockhausen, Boulez).
The idea of musical open form is that the composer does not create a ready-to-use score, but rather a set of potential performances, which can be seen as a model of scores, as explained by Eckel (1997): "Music is not any longer conceived in form of finite units but in terms of models capable of producing a potentially infinite number of variants of a particular family of musical ideas". The selection or instantiation of the actual score to be played is delegated to the performer. In recent incarnations of open form, it is the listener himself who instantiates the model, as for instance in the Cave (Cruz-Neira et al., 1993) or CyberStage (Eckel, 1997). In these cases, the user is immersed in a realistic virtual environment and controls his position and movement in a virtual world. His movements are translated into variations in the musical material being heard. These approaches may be considered radical, in the sense that the user has a great deal of responsibility in making the music. However, the issue of semantics is not directly addressed, since the model is in principle under-designed, i.e. all possible explorations are always "licit", whatever they may be. In this respect, there is a strong relation between open-form virtual environments and programming languages for music composition, such as OpenMusic (Assayag et al., 1997), CommonMusic (Taube, 1991) or Elody (Orlarey et al., 1997). In these approaches, indeed, the goal is to let the user explore spaces with as much freedom as possible, not to constrain the user to specific areas.

1.4 Active Listening

Active listening refers to the idea that listeners can be given some degree of control over the music they listen to, which makes it possible to propose different musical perceptions of a piece of music, as opposed to traditional listening, in which the musical media is played passively by some neutral device. The objective is both to increase the musical comfort of listeners and, when possible, to provide listeners with smoother paths to new music (music they do not know, or do not like). Active listening is thus related to the notion of open form outlined above, but differs in two important respects: 1) we seek to create listening environments for existing music repertoires, rather than environments for composition or free musical exploration, and 2) we aim at creating environments in which the variations always preserve the original semantics of the music, at least when this semantics can be defined precisely. For us, the issue is therefore not to introduce yet another technological button in the interface of the listening device, but rather to design buttons that "make sense", thereby breaking the long tradition of technological buttons initiated by Edison. What "sense", what "meaning" are we talking about? How can music controls be designed to trigger semantics-preserving actions? The answer stems from the new landscape of music recording created by digital multimedia, sketched in the next section. We will then illustrate our ideas with two examples of active listening projects at Sony Computer Science Laboratory Paris.

2. The New Facts of Multimedia

Digitalization of multimedia data has a number of technical advantages which are well known today: better sound quality, better compression, lossless copying, etc.
The aim of this chapter is to show that the digitalization of multimedia data also induces (even if still largely in a potential form) a number of revolutions in the way music may be accessed and listened to by end users. We will outline three of these revolutions, which form the basis of our argument, focusing on the paradigm shifts they convey rather than on technical aspects.

2.1 Structured Audio: Home as a Reconstruction Machine

The idea of structured audio was initially devised to allow better compression of high-quality audio. Standardization efforts like the MPEG-4 project embody this idea and try to make it practical on a large scale (see, e.g., the Machine Listening Group of the Media Lab, Scheirer et al., 1998). The idea is simple: instead of transmitting a ready-to-listen sound, only a description of how to make the sound is transmitted. The actual sound is reconstructed at home, or at the listener's location, provided of course that he/she has the right software to perform this reconstruction properly. Structured audio actually extends this basic idea to include fully-fledged scene descriptions, that is, not only descriptions of individual sounds, but descriptions of groups of sounds playing together to make up a piece of music. The technical details of scene description also include everything needed to reconstruct a sound or a piece of music faithfully, e.g. effects, adaptation to the local sound reproduction system, and so forth. In our context, we argue that the notion of scene description opens new doors for meaningful controls. Indeed, since the music is delivered as a "kit", many possibilities can be imagined to influence the way the kit is actually assembled, according to user preferences. Of course, these variations on how the kit should be assembled have to be "coherent", which is precisely the matter of our work.

2.2 Meta-data and All That Jazz

The fact that musical data is now produced, coded and transmitted in digital form has numerous and well-known advantages: better sound quality, and the possibility of lossless transmission and copying (thereby raising new copyright problems). An important non-technical consequence is the possibility to encode not only the music itself (the digitized sound) but also any sort of symbolic information. Such symbolic information may be used to code and transmit data about the music itself, so-called information on content, meta-data, or "bits about bits". Why would one want to transmit such meta-data? The interest is obvious in the context of document indexing. If musical data is accompanied by adequate descriptions, digital catalogues can be accessed using sophisticated query systems. Current standardization efforts like MPEG-7 embody this idea (MPEG-7, 1998) and try to define standards for describing meta-data for all sorts of multimedia documents. MPEG-7 aims, for instance, at making the web more searchable for multimedia content than it is today, and at making large content archives accessible to the public. Here again, we would like to emphasize the conceptual rather than the technical aspects of this paradigm shift: meta-data also opens doors for imagining new listening systems in which the user may access data in a drastically different way. Instead of being a passive, neutral support, music becomes an active, self-documented knowledge base.
Again, what kind of listening devices can be imagined that exploit this information?

2.3 Size of Digital Catalogues

Digitalization of multimedia data has yet another consequence: the availability of huge catalogues of multimedia data to users. In the case of music, there is, here also, a conceptual shift which has nothing to do with the technology of large databases. The main issue raised by this technological advance is how to access huge catalogues of music, not from a technical viewpoint, but from a user's viewpoint. Recall the juke-box, invented in the late 1920s: a typical juke-box would contain about 120 titles, which is the size of an average user's record collection. Browsing through all the titles was probably part of the pleasure, and selection could be made just as at home: by choosing one item out of a collection of items, each of which the user has seen at least once. Now, a typical catalogue of a major record company contains about 50,000 items. What happens when the collection to select from is such a catalogue? Even more terrifying, what happens if all recorded titles become available to users at home through networks? Estimating the total number of all recorded music is difficult, but it can be approximated at about 2 million titles (see, e.g., the size of the MusicBoulevard or Amazon databases). The figure can probably be doubled to include non-Western music. Every month, about 4,000 new CDs are issued on the market. It is clearly impossible to apply the usual techniques of music selection in this new context. What does it mean to "look for" a title when the mass of titles is so huge?

3. Spatialization: The MusicSpace Project

The first parameter which comes to mind when thinking about user control of music is the spatialization of sound sources. We conduct a project, called MusicSpace, for investigating the technical and conceptual issues related to meaningful user control of music spatialization.

3.1 Motivation and Description of MusicSpace

In MusicSpace, the user listens to pieces of music using an interface in which each instrument of the piece is represented by a graphical object (see Figure 2). Moving these objects around modifies the mixing of the sound sources in the global sound. Moreover, an object representing the listener himself (an avatar) is also displayed in the interface, so that all the mixing parameters (volume, panoramic position, etc.) are computed according to the avatar's position. The basic system provides the possibility of 1) moving the avatar around, to induce a mixing as if the listener were moving around the actual musical setup, and 2) moving the instruments themselves around, thereby inducing a different mixing, as if the listener were a sort of sound producer. Experiments with this basic system were conducted with average listeners and with music composers. It clearly appeared that, although the physical actions of moving avatar or instrument icons around in a window are very similar, the possibility of moving the listener's avatar is conceptually quite different from the possibility of moving instruments. Indeed, moving the avatar corresponds to the action of moving oneself around a musical setting. Moving instruments corresponds to a more technical view of the music: the sound engineer's view. This second possibility appeared heretical to some users, since it practically gives users the possibility of totally changing the overall mixing of the musical piece!
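To make this concrete, here is a minimal sketch (not the actual MusicSpace implementation) of how mixing parameters such as gain and pan could be derived from the 2D positions of the avatar and of an instrument; the inverse-distance law and all names used are assumptions made for this example.

```python
import math
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    x: float
    y: float

def mixing_parameters(avatar, instrument, reference_distance=1.0):
    """Return (gain, pan) for one instrument as heard from the avatar's position.

    gain follows a simple inverse-distance law (1.0 at or below the reference
    distance, quieter further away); pan in [-1, 1] is the lateral offset of
    the instrument relative to the avatar (-1 = full left, +1 = full right).
    """
    dx = instrument.x - avatar.x
    dy = instrument.y - avatar.y
    distance = max(math.hypot(dx, dy), 1e-6)
    gain = min(1.0, reference_distance / distance)
    pan = dx / distance
    return gain, pan

# Moving the avatar (or an instrument icon) changes gain and pan accordingly.
listener = Source("listener", 0.0, 0.0)
piano = Source("piano", 2.0, 3.0)
print(mixing_parameters(listener, piano))
```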
The second phase of our project consisted in introducing a way of constraining user actions, to avoid situations where the resulting mixing is totally unrelated to the original spirit of the music (Pachet & Delerue, 1998). We proceeded by introducing a particular technique, called constraint perturbation, which allows instruments to be linked together by relations that are always enforced: the system uses these constraints to propagate changes, so that the setup always remains consistent. For instance, a "related" constraint may be set between the drums and the bass, so that when one of them is moved closer to the listener's avatar, the other one is moved accordingly (with the same distance ratio). Conversely, a "balance" constraint may be set between two sound sources that should always be in mutual opposition: for instance, when the chorusing instrument is brought closer, the accompaniment is moved away. These constraints can finally be composed to create rich environments in which users may change the instrument positions, while the constraint system ensures that the overall mixing always remains consistent with the engineer's or composer's constraints (a small code sketch of this propagation mechanism is given at the end of this section).

Figure 2. The interface of MusicSpace. Instruments are related by constraints. The avatar as well as the instruments can be moved around by the user. The constraints embody an "automatic" sound engineer.

3.2 Exploration Space

There are two ways to interpret MusicSpace. One is to see it as an embodiment (simplistic but operational) of a sound engineer: the user may move sounds using high-level, simple actions; the system "corrects" these actions by moving other sound sources according to its knowledge of sound mixing. This knowledge is explicitly represented as constraints. The other viewpoint is to see mixing constraints as an ontology of mixing actions, which makes it possible to mix in terms of properties of setups, rather than in terms of atomic actions on knobs and faders. This ontology allows properties of configurations to be specified, and guarantees that they are always enforced, rather than specifying explicit configurations. In this respect, constraints represent a semantics of sound source configurations, and the resulting constrained exploration space allows various configurations to be explored without violating the spirit of the original mixing. MusicSpace is also to be seen as an example of the exploitation of "reconstructed music". As outlined in Section 2.1, future standards will deliver music in chunks, possibly transmitting sound sources separately, together with specifications on how to reconstruct the musical whole from the parts. Constraints are one way of specifying this reconstruction, which nevertheless leaves room for semantics-preserving user control. As such, MusicSpace is a radically new form of Gramophone, as described in Section 1.1: not only does it provide more refined controls on sound spatialization than turning the horn around, but these controls preserve the underlying intention of sound source configurations.
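Here is the promised sketch of the constraint mechanism of Section 3.1: a purely illustrative, single-step propagation (not the MusicSpace implementation) showing how "related" and "balance" constraints could react to a user moving one instrument; the class, the distance representation and the absence of chained propagation are assumptions made for this example (a real solver would also handle chains and cycles of constraints).

```python
# Minimal sketch of constraint propagation over instrument-to-avatar distances.

class Mix:
    def __init__(self, distances):
        self.distances = dict(distances)   # instrument -> distance to the avatar
        self.constraints = []              # list of (kind, a, b)

    def relate(self, a, b):
        """'related': a and b keep the same distance ratio."""
        self.constraints.append(("related", a, b))

    def balance(self, a, b):
        """'balance': bringing a closer pushes b away (and vice versa)."""
        self.constraints.append(("balance", a, b))

    def move(self, instrument, new_distance):
        """User action: move one instrument, then propagate to the others."""
        ratio = new_distance / self.distances[instrument]
        self.distances[instrument] = new_distance
        for kind, a, b in self.constraints:
            if instrument not in (a, b):
                continue
            other = b if instrument == a else a
            if kind == "related":
                self.distances[other] *= ratio   # same distance ratio
            elif kind == "balance":
                self.distances[other] /= ratio   # opposite movement

mix = Mix({"drums": 4.0, "bass": 4.0, "voice": 2.0, "strings": 3.0})
mix.relate("drums", "bass")        # the rhythm section moves as a block
mix.balance("voice", "strings")    # the accompaniment recedes when the voice comes closer
mix.move("drums", 2.0)             # the bass follows: now at 2.0 as well
mix.move("voice", 1.0)             # the strings are pushed back to 6.0
print(mix.distances)
```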
4. Music Catalogue Access

The issue of music delivery concerns the transportation of music in digital format to users. Music delivery has recently benefited from technological progress in network transmission, audio compression, and the protection of digital data (Memon & Wong, 1998). These advances make it possible, now or in the near future, to deliver music quickly and safely to users in digital format through networks, either the internet or digital audio broadcasting. Moreover, as seen in Section 2.2, the digitalization of data makes it possible today to transport information on content, and not only the data itself. Together, these techniques give users, at home, access to huge catalogues of annotated multimedia data, music in particular. These techniques aim at solving the distribution problem, i.e. how to transport data quickly and safely to users. Paradoxically, these technological advances also raise a new problem for the user: how to choose from such huge catalogues?

4.1 Motivation and Ideas

From the user's viewpoint, accessing a large quantity of music is indeed problematic: it cannot be reduced to a simple database problem because, by definition, users do not know precisely what they are looking for. The problem of choosing items is a general one in Western societies, in which an ever-increasing number of products is available. For entertainment, and especially music, the choosing problem is specific, because the underlying goals (personal enjoyment and excitement) do not fall into the usual categories of rational decision making. Although understanding a user's goals in listening to music is very complex in full generality, we can summarize the problem by two basic and contradictory ingredients: the desire for repetition, and the desire for surprise.

The desire for repetition is well known in music theory and cognition. Experimental psychology shows the importance of repetition in music. At the melodic or rhythmic level, "repetition breeds content". For instance, sequences of repeating notes create expectations of the same note occurring again. At a higher level, tonal music, for instance, is based on structures that create strong expectations of the next musical events to come (for instance, a dominant seventh chord creates an expectation of a resolution). Music theorists have tried to capture this phenomenon by proposing various theories of musical perception based on expectation mechanisms (see, e.g., Meyer, 1956), particularly for modeling the perception of melodies (Narmour, 1992). At the more global level of music selection, this desire for repetition leads people to want to listen to music that they already know (and like), or to music that is similar to music they already know. For instance, a Beatles fan will most probably be interested in listening to the latest Beatles bootleg containing hitherto unreleased versions of his favorite hits.

On the other hand, the desire for surprise is a key to understanding music at all levels of perception. The very theories that emphasize the role of expectation in music also show that listeners do not favor expectations that are always fulfilled, and enjoy surprises and untypical musical progressions (see, e.g., Smith and Melara, 1990). At a larger level, listeners want from time to time to discover new music: new titles, new bands, or new musical genres. This desire is not necessarily made explicit, but it is nevertheless as important as the desire for repetition. Of course, these two desires are contradictory, and the issue in music selection is precisely to find the right compromise between these two forces: provide users with items they already know, and provide them with items they do not know, but will probably like.
From the viewpoint of record companies, one goal of music delivery is to achieve a better exploitation of the catalogue. Indeed, record companies have problems exploiting their catalogues using standard distribution schemes. For technical reasons, only a small part of the catalogue is actually "active", i.e. proposed to users in the form of easily available products. More importantly, the analysis of music sales clearly shows decreases in album sales, and short-term policies based on selling many copies of a limited number of items (hits) seem to be no longer profitable. Additionally, sales of general-purpose "samplers" (e.g. "Best of Love Songs") are no longer profitable either, because users already have the hits in their own collections, or because they do not want to buy samplers in which they like only a fraction of the titles. Exploiting the catalogues more fully has become a necessity for record companies. Instead of proposing a small number of hits to a large audience, a natural solution is to increase diversity, by proposing more customized albums to users.

4.2 Approaches in Music Selection

Current approaches in music selection can be split into two categories: 1) query systems for accessing music catalogues, and 2) recommendation systems for proposing novel titles to users. In both cases, these approaches provide sets of items to the user, which he/she still has to choose from. Query systems mainly address database issues for storing and representing musical data. They propose means of querying musical items using some sort of semantic information. Various kinds of queries can be issued by users, either very specific (e.g. the title of the Beatles song which contains the word "pepper") or largely under-specified (e.g. "Jazz" titles).

Collaborative filtering approaches (Shardanand and Maes, 1995) aim primarily at achieving the "surprise" goal, i.e. issuing recommendations of novel titles to users, with the hope that these recommendations will be enjoyed. Collaborative filtering is based on the idea that there are patterns in tastes: tastes are not distributed uniformly. This idea can be implemented very simply by managing a so-called profile for each user connected to the service. The profile is typically a set of associations of items to grades. For instance, in the MyLaunch system, grades vary from 0 (I hate it) to 5 (this is my preferred item). In the recommendation phase, the system looks for all the agents having a profile similar to the user's. This similarity can be computed easily by a distance measure on profiles, such as a Hamming distance. Finally, the system looks for items liked by these similar agents which are not yet known by the user, and recommends these items to him/her. Typical collaborative filtering systems for music are the Firefly system (Firefly, 1998), MyLaunch (MyLaunch, 1998), the Amazon web site (Amazon, 1998), and the similarity engine (Infoglide, 1998).
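The following is a minimal sketch of this recommendation scheme, assuming profiles are dictionaries mapping item names to grades from 0 to 5; the distance measure, the thresholds and the toy profiles are illustrative assumptions, not those of any of the systems cited above.

```python
def profile_distance(p, q):
    """Distance between two profiles: mean absolute grade difference
    over the items both users have graded (infinite if no overlap)."""
    common = set(p) & set(q)
    if not common:
        return float("inf")
    return sum(abs(p[i] - q[i]) for i in common) / len(common)

def recommend(user, others, max_distance=1.0, min_grade=4):
    """Items liked (grade >= min_grade) by similar agents and unknown to `user`."""
    neighbours = [p for p in others if profile_distance(user, p) <= max_distance]
    suggestions = {}
    for p in neighbours:
        for item, grade in p.items():
            if item not in user and grade >= min_grade:
                suggestions[item] = max(grade, suggestions.get(item, 0))
    return sorted(suggestions, key=suggestions.get, reverse=True)

alice = {"Help!": 5, "Kind of Blue": 4, "The Wall": 2}
bob   = {"Help!": 5, "Kind of Blue": 5, "A Love Supreme": 5}
carol = {"The Wall": 5, "Wish You Were Here": 5, "Help!": 1}
print(recommend(alice, [bob, carol]))   # -> ['A Love Supreme']
```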
However, there are limitations to this approach. These limitations appear when studying quantitative simulations of collaborative filtering systems, using simulation techniques inspired by work on the dissemination of cultural tastes (Epstein, 1996; Cavalli-Sforza and Feldman, 1981). The first limitation is the inclination towards "cluster formation", which is induced by the very dynamics of the system. The experimental results achieved so far show that such systems produce interesting recommendations for naive profiles, but get stuck as soon as the profiles get bigger (about 120 items): eclectic profiles are somehow disadvantaged. Another problem, shown experimentally, is that the dynamics inherently favors the creation of hits, i.e. items which are liked by a huge fraction of the population. Of course, the existence of hits is not a bad thing in itself, but hits nevertheless limit the probability of other items "surviving" in a world dominated by weighted sums. In short, collaborative filtering is a means of building similarity relations between items, based on statistical properties of groups of agents. As such, it addresses the goal of surprise in a safe way, by proposing to users items which are similar to ones they already know. However, cluster formation and the uneven distribution of chances given to items (e.g. the formation of hits) are the main drawbacks of the approach, both from the user's viewpoint (clusters from which it is difficult to escape) and from the content provider's viewpoint (no systematic exploitation of the catalogue).

4.3 On-the-fly Music Program Generation

The RecitalComposer project (Pachet et al., 1999) is based on a radically different approach to music selection: instead of proposing sets of individual titles to users, we propose to build fully-fledged music programs, i.e. sequences of music titles. There are several motivations for producing music programs, rather than unordered collections of titles. One is simply the recognition that music titles are rarely listened to in isolation: CDs, radio programs and concerts are all made up of temporal sequences of pieces, in a certain order. This order is most of the time significant, i.e. different orders do not produce the same impressions on listeners. In a way, the whole craft of music program selection is precisely to build coherent sequences, rather than simply select individual titles. The second motivation is that properties of sequences play an important role in the perception of music: for instance, several music titles in a similar style convey a particular atmosphere, and create expectations for the titles to come. As a consequence, an individual title may not be particularly enjoyed by a listener in abstracto, but may be the right piece at the right time within a sequence. Rather than focusing on the similarity of individual titles, we can exploit properties of sequences to satisfy the goals of music selection.

The proposal is therefore the following. First, we build a database of titles, with content information for each title. Then, we specify music programs by giving the properties or patterns we want the program to have. These properties are represented as constraints, in the sense of constraint satisfaction techniques. Finally, a constraint solver computes the solutions of the corresponding combinatorial pattern generation problem. The problem, as we define it, is therefore to build music programs, seen as temporal sequences of titles, so as to satisfy the three goals of music selection: repetition, surprise, and full exploitation of the catalogue. A small illustrative sketch of this formulation is given below.
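As a rough illustration of this formulation, the sketch below builds short programs from a toy catalogue by enumerating candidate sequences and keeping only those that satisfy two hypothetical constraints (continuity of style between consecutive titles, and no repeated artist). The attribute names, the constraints and the naive generate-and-test solver are assumptions made for this example, not the RecitalComposer database format or solving technique, which would prune the search space rather than enumerate it.

```python
from itertools import permutations

# Toy catalogue: each title carries a little content information (meta-data).
catalogue = [
    {"title": "So What",       "artist": "Miles Davis",  "style": "jazz", "tempo": 136},
    {"title": "Blue in Green", "artist": "Bill Evans",   "style": "jazz", "tempo": 56},
    {"title": "Come Together", "artist": "The Beatles",  "style": "rock", "tempo": 82},
    {"title": "Roxanne",       "artist": "The Police",   "style": "rock", "tempo": 132},
    {"title": "Take Five",     "artist": "Dave Brubeck", "style": "jazz", "tempo": 174},
]

def continuity(sequence):
    """Coherence constraint: consecutive titles must share the same style."""
    return all(a["style"] == b["style"] for a, b in zip(sequence, sequence[1:]))

def all_different_artists(sequence):
    """Exploitation constraint: no artist appears twice in the program."""
    artists = [t["artist"] for t in sequence]
    return len(artists) == len(set(artists))

def programs(catalogue, length, constraints):
    """Naive solver: enumerate sequences, keep those satisfying all constraints."""
    for sequence in permutations(catalogue, length):
        if all(c(sequence) for c in constraints):
            yield [t["title"] for t in sequence]

for program in programs(catalogue, 3, [continuity, all_different_artists]):
    print(program)
```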
As an example, we will take a music program for which we specify the desired properties. In the next sections, we will focus on the format of the database and the nature of the constraints. Here is a "liner-note"-like description of a typical music program. The properties of the sequence may be grouped in three categories: 1) user preferences, 2) global properties bearing on the coherence of the sequence, and 3) constraints on the exploitation of the catalogue. The following example describes a music program called "Driving a Car", ideally suited for listening to music in a car: